
    von Neumann-Morgenstern and Savage Theorems for Causal Decision Making

    Causal thinking and decision making under uncertainty are fundamental aspects of intelligent reasoning. Decision making under uncertainty has been well studied when information is considered at the associative (probabilistic) level. The classical theorems of von Neumann-Morgenstern and Savage provide a formal criterion for rational choice using purely associative information. Causal inference, however, often yields uncertainty about the exact causal structure, so we ask what kinds of decisions are possible under those conditions. In this work, we consider decision problems in which the available actions and consequences are causally connected. After recalling a previous causal decision-making result, which relies on a known causal model, we consider the case in which the causal mechanism that governs the environment is unknown to a rational decision maker. In this setting we state and prove a causal version of Savage's Theorem, which we then use to develop a notion of causal games with a corresponding causal Nash equilibrium. These results highlight the importance of causal models in decision making and the variety of potential applications. Comment: Submitted to the Journal of Causal Inference.
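
    As a minimal illustration of the kind of criterion such representation theorems justify (the notation below is introduced only for exposition, not taken from the paper): when the causal model is known, an action $a$ can be ranked by its interventional expected utility $EU(a) = \sum_{c} P(c \mid \mathrm{do}(a))\, U(c)$; when the causal mechanism is unknown, one hedged reading is to average over candidate causal models $G$ under a prior $P(G)$, giving $EU(a) = \sum_{G} P(G) \sum_{c} P_{G}(c \mid \mathrm{do}(a))\, U(c)$, with the rational choice being $\arg\max_a EU(a)$. Here the utility $U$, the prior $P(G)$, and the consequence set are illustrative placeholders.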

    Detecting dressing failures using temporal–relational visual grammars

    Evaluation of dressing activities is essential in the assessment of the performance of patients with psycho-motor impairments. However, the current practice of monitoring dressing activity (performed by the patient in front of the therapist) has a number of disadvantages, given the personal nature of dressing as well as inconsistencies between the performance recorded in the clinic and the performance of the same activity carried out in the patient's natural environment, such as their home. A system that can evaluate dressing activities automatically and objectively would alleviate some of these issues. However, a number of challenges arise, including difficulties in correctly identifying garments, their position on the body (partially or fully worn), and their position in relation to other garments. To address these challenges, we have developed a novel method based on visual grammars to automatically detect dressing failures and explain the type of failure. Our method is based on the analysis of image sequences of dressing activities and only requires a video recording device. The analysis relies on a novel technique which we call a temporal–relational visual grammar; it can reliably recognize temporal dressing failures, while also detecting spatial and relational failures. Our method achieves 91% precision in detecting dressing failures performed by 11 subjects. We explain these results and discuss the challenges encountered during this work.
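
    To make the idea concrete, the toy sketch below checks two rules of a temporal–relational flavour against a recognized garment sequence; the garment labels, states, rule names, and failure messages are illustrative assumptions, not the grammar defined in this work.

```python
# Toy sketch (not the published grammar): check hypothetical temporal and
# completion rules against a sequence of recognized garment states.
from dataclasses import dataclass

@dataclass
class Observation:
    time: int      # frame or second index
    garment: str   # e.g. "undershirt", "shirt"
    state: str     # "not_worn", "partially_worn", "fully_worn"

def first_time(seq, garment, state):
    """Return the first time a garment reaches a given state, or None."""
    for obs in seq:
        if obs.garment == garment and obs.state == state:
            return obs.time
    return None

def check_order(seq, inner, outer):
    """Temporal rule: the inner garment must be fully worn before the outer one."""
    t_inner = first_time(seq, inner, "fully_worn")
    t_outer = first_time(seq, outer, "fully_worn")
    if t_outer is not None and (t_inner is None or t_inner > t_outer):
        return f"temporal failure: {outer} worn before {inner}"
    return None

def check_completion(seq, garment):
    """Completion rule: the garment must end up fully worn."""
    if first_time(seq, garment, "fully_worn") is None:
        return f"completion failure: {garment} only partially worn or missing"
    return None

if __name__ == "__main__":
    sequence = [
        Observation(3, "shirt", "partially_worn"),
        Observation(7, "shirt", "fully_worn"),
        Observation(9, "undershirt", "fully_worn"),
    ]
    failures = [f for f in (check_order(sequence, "undershirt", "shirt"),
                            check_completion(sequence, "undershirt")) if f]
    print(failures)  # -> ['temporal failure: shirt worn before undershirt']
```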

    Detecting affective states in virtual rehabilitation

    Virtual rehabilitation supports motor training following stroke by means of tailored virtual environments. To optimize therapy outcome, virtual rehabilitation systems automatically adapt to each patient's changing needs. Adaptation decisions should ideally be guided by both the observable performance and the hidden mind state of the user. We hypothesize that some affective aspects can be inferred from observable metrics. Here we present preliminary results of a classification exercise to decide among four states: tiredness, tension, pain, and satisfaction. Descriptors of 3D hand movement and finger pressure were collected from 2 post-stroke participants while they practised on a virtual rehabilitation platform. Linear Support Vector Machine models were learnt to uncover a predictive relation between the observations and the affective states considered. Initial results are promising (ROC area under the curve, mean ± std: 0.713 ± 0.137). Confirmation of these results would open the door to incorporating surrogates of mind state into the algorithm that decides on therapy adaptation.
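
    A minimal sketch of this general approach, assuming scikit-learn and placeholder data (the descriptors and labels below are invented for illustration, not the study's dataset or pipeline):

```python
# Sketch: linear SVM predicting one affective state from movement/pressure
# descriptors, scored with ROC AUC. Data here are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # placeholder descriptors (e.g. speed, pressure statistics)
y = rng.integers(0, 2, size=200)  # placeholder binary labels for one state (e.g. tiredness)

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC (mean ± std): {auc.mean():.3f} ± {auc.std():.3f}")
```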

    Multi-label and multimodal classifier for affective states recognition in virtual rehabilitation

    Computational systems that process multiple affective states may benefit from explicitly considering the interaction between the states to enhance their recognition performance. This work proposes the combination of a multi-label classifier, the Circular Classifier Chain (CCC), with a multimodal classifier, Fusion using a Semi-Naive Bayesian classifier (FSNBC), to explicitly include the dependencies between multiple affective states during the automatic recognition process. This combination of classifiers is applied to a virtual rehabilitation context for post-stroke patients. We collected data from post-stroke patients, including finger pressure, hand movements, and facial expressions during ten longitudinal sessions. Videos of the sessions were labelled by clinicians to recognize four states: tiredness, anxiety, pain, and engagement. Each state was modelled by an FSNBC receiving the information of finger pressure, hand movements, and facial expressions. The four FSNBCs were linked in the CCC to exploit the dependency relationships between the states. The CCC converged within at most 5 iterations for all patients. Results (ROC AUC) of the CCC with the FSNBC are above 0.940 ± 0.045 (mean ± standard deviation) for the four states. Relationships of mutual exclusion between engagement and all the other states and co-occurrences between pain and anxiety were detected and discussed.
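
    The sketch below is one simplified reading of the chaining idea, with a scikit-learn Gaussian naive Bayes standing in for the FSNBC and random placeholder data; it is illustrative only, not the published CCC algorithm.

```python
# Simplified circular classifier chain: one classifier per affective state,
# each fed the current estimates of the other states, iterated until the
# joint predictions stop changing or a maximum number of passes is reached.
import numpy as np
from sklearn.naive_bayes import GaussianNB

STATES = ["tiredness", "anxiety", "pain", "engagement"]

def fit_predict_ccc(X, Y, max_iters=5):
    """X: (n_samples, n_features); Y: (n_samples, n_states) 0/1 label matrix."""
    n, k = Y.shape
    models = []
    for j in range(k):
        # Train each per-state classifier with the other states' labels as extra features.
        others = np.delete(Y, j, axis=1)
        models.append(GaussianNB().fit(np.hstack([X, others]), Y[:, j]))
    preds = np.zeros_like(Y)
    for _ in range(max_iters):
        prev = preds.copy()
        for j in range(k):
            others = np.delete(preds, j, axis=1)
            preds[:, j] = models[j].predict(np.hstack([X, others]))
        if np.array_equal(preds, prev):  # chain has converged
            break
    return preds

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 10))                    # placeholder multimodal features
    Y = rng.integers(0, 2, size=(120, len(STATES)))   # placeholder state labels
    print(fit_predict_ccc(X, Y).mean(axis=0))         # per-state positive-prediction rate
```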